Knowledge Representation and Reasoning (KRR)
PolyU COMP5511 Lecture 4

INTRODUCTION

Welcome to Lesson 4: Knowledge Representation and Reasoning (KRR). In this module, we address the fundamental challenge of Artificial Intelligence: how to model the world symbolically. It is not enough for a machine to store data; it must reason about it. We will explore how AI systems represent information logically to perform inference, moving beyond simple pattern matching.

SECTION 1: Historical Foundations

We will traverse the technical landscape from classical Propositional Logic and First-Order Logic to the rigid yet powerful structures of legacy Expert Systems. These systems provided the first "thinking" machines capable of logical deduction.

SECTION 2: Modern Convergence

Finally, we arrive at the cutting edge of modern AI, examining Knowledge Graphs and Neuro-Symbolic AI. This emerging field aims to fuse the strict explainability of logic with the adaptive learning capabilities of neural networks.

Context Alert
Unlike neural networks, which function as "black boxes," KRR focuses on white-box models where the reasoning path is explicit, verifiable, and interpretable.
Symbolic Logic Syntax Example
Fact: Parent(Alice, Bob)
Fact: Parent(Bob, Charlie)
Rule: ∀x,y,z (Parent(x, y) ∧ Parent(y, z) → Grandparent(x, z))
Inference: Grandparent(Alice, Charlie)
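
The following is a minimal sketch, in Python, of how a forward-chaining inference engine could derive the Grandparent fact above. The tuple layout, the function name grandparent_rule, and the fixed-point loop are illustrative assumptions, not the machinery of any particular system.

Forward-Chaining Sketch (Python)

# Facts stored as (predicate, subject, object) triples; illustrative only.
facts = {("Parent", "Alice", "Bob"), ("Parent", "Bob", "Charlie")}

def grandparent_rule(facts):
    # Apply: Parent(x, y) AND Parent(y, z) -> Grandparent(x, z).
    derived = set()
    for (p1, x, y1) in facts:
        for (p2, y2, z) in facts:
            if p1 == p2 == "Parent" and y1 == y2:
                derived.add(("Grandparent", x, z))
    return derived

# Chain forward to a fixed point: stop once no new facts appear.
while True:
    new_facts = grandparent_rule(facts) - facts
    if not new_facts:
        break
    facts |= new_facts

print(("Grandparent", "Alice", "Charlie") in facts)  # prints: True

Running this prints True, mirroring the Inference line in the example above.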
Case Study: The Medical Diagnostician
Read the scenario below and answer the questions.
Early AI systems such as MYCIN used KRR to diagnose blood infections. Unlike modern machine learning, which predicts from statistical patterns, MYCIN applied more than 600 if-then rules elicited from expert physicians.
Q1
1. Why is explainability critical in a medical KRR system compared to a generic image classifier?
Answer:
In medicine, doctors require a verifiable path (the chain of rules used) to trust a diagnosis. A 'black-box' prediction is unacceptable for critical decisions. KRR provides this explicit reasoning path.
Q2
2. How does the system handle a rule like "If fever is high, THEN infection is likely"?
Answer:
This rule is represented symbolically (e.g., HighFever → LikelyInfection). The Inference Engine checks whether the fact HighFever is true in the patient's record; if so, it asserts LikelyInfection as a new conclusion, as illustrated in the sketch below.
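
A minimal sketch, again in Python, of how such a production rule could fire. The fact names HighFever and LikelyInfection come from the answer above; the rule encoding and the stabilization loop are illustrative assumptions.

Production-Rule Sketch (Python)

# Each rule pairs a set of antecedent facts with one consequent.
rules = [({"HighFever"}, "LikelyInfection")]

# The patient's record, modeled as a set of currently known facts.
patient_facts = {"HighFever"}

# Fire every rule whose antecedents all hold; repeat until stable.
changed = True
while changed:
    changed = False
    for antecedents, consequent in rules:
        if antecedents <= patient_facts and consequent not in patient_facts:
            patient_facts.add(consequent)
            changed = True

print(patient_facts)  # {'HighFever', 'LikelyInfection'}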
Q3
3. Identify the limitations of manually encoding these rules (The Knowledge Acquisition Bottleneck).
Answer:
The primary limitation is the Knowledge Acquisition Bottleneck: the difficulty and time required for human experts to articulate all their knowledge into formal, explicit rules. Real-world knowledge is often ambiguous and too vast for manual encoding.